CSCI-B609: A Theorist's Toolkit, Fall 2016, Nov 8, Lecture 20: LP Relaxation and Approximation Algorithms

Authors

  • Yuan Zhou
  • Syed Mahbub Hafiz
Abstract

When the variables of a 0-1 Integer Linear Program (ILP) are allowed to take any value in the interval [0, 1] instead of only the fixed values {0, 1}, the weakened constraints define a new Linear Programming (LP) problem. For example, the constraint xi ∈ {0, 1} becomes 0 ≤ xi ≤ 1. This relaxation converts an NP-hard optimization problem (the ILP) into a related problem that is solvable in polynomial time. Moreover, the solution to the relaxed linear program can be used to obtain information about the optimal solution to the original problem. Approximation algorithms, in turn, are algorithms that find provably near-optimal solutions to optimization problems, and linear programming relaxation is an established technique for designing such approximation algorithms for NP-hard optimization problems.
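The rounding idea described in the abstract can be illustrated on Vertex Cover. The following is a minimal sketch under an assumed fractional solution: no LP solver is invoked, and `x_frac` merely stands in for an optimal solution of the relaxed program on this small example graph.

```python
# Sketch of LP-relaxation-based rounding for Vertex Cover.
# The fractional values below are illustrative placeholders for
# an optimal solution of the relaxed LP, not the output of a solver.

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]

# Relaxed LP: minimize sum_i x_i subject to
#   x_u + x_v >= 1 for every edge (u, v), and 0 <= x_i <= 1.
x_frac = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
assert all(x_frac[u] + x_frac[v] >= 1 for u, v in edges)  # LP feasibility

# Deterministic rounding: pick every vertex with x_i >= 1/2.
cover = {v for v, x in x_frac.items() if x >= 0.5}

# Each constraint x_u + x_v >= 1 forces max(x_u, x_v) >= 1/2,
# so the rounded set covers every edge ...
assert all(u in cover or v in cover for u, v in edges)

# ... and has size at most 2 * (LP value) <= 2 * OPT,
# giving the classical factor-2 approximation guarantee.
assert len(cover) <= 2 * sum(x_frac.values())
print(sorted(cover))  # prints [0, 1, 2, 3]
```

The rounding threshold 1/2 is what makes both checks go through: feasibility of the LP guarantees every edge has an endpoint with value at least 1/2, and each picked vertex contributes at least 1/2 to the LP objective.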


Similar resources

A Theorist's Toolkit, Fall 2016, Nov 3, Lecture 19: Solving Linear Programs

In Lecture 18, we talked about Linear Programming (LP). LP refers to the following problem: we are given as input the following m constraints (inequalities):


A Theorist's Toolkit (CMU 18-859T, Fall 2013), Lecture 16: Constraint Satisfaction Problems, 10/30/2013

First let's make sure this is actually a relaxation of our original problem. To see this, consider an optimal cut F* : V → {1, −1}. Then, if we let v⃗i = (F*(vi), 0, …, 0), all of the constraints are satisfied and the objective value remains the same. So, any solution to the original problem is also a solution to this vector program, and therefore we have a relaxation. We will use SDPOpt t...


Advanced Approximation Algorithms (CMU 18-854B, Spring 2008), Lecture 27: Algorithms for Sparsest Cut, Apr 24, 2008

In Lecture 19, we saw an LP-relaxation-based algorithm to solve the sparsest cut problem with an approximation guarantee of O(log n). In this lecture, we will show that the integrality gap of the LP relaxation is Ω(log n), and hence this is the best approximation factor one can get via the LP relaxation. We will also start developing an SDP-relaxation-based algorithm which provides an O(√(log n)) ...


Approximation Algorithms, Fall Semester 2003, Lecture 6: Sept 17, 2003

∑_{i : x̄i ∈ Cj} (1 − yi) ≥ zj for all j, 0 ≤ yi ≤ 1 for all i, 0 ≤ zj ≤ 1 for all j. We were investigating how well the algorithm does that sets xi to true with probability yi, independently of the other xi's, where y is an optimal solution to the LP relaxation. Without loss of generality, consider the clause Cj = (x1 ∨ x2 ∨ ⋯ ∨ xk): P(Cj is not satisfied) = P(x1, x2, …, xk are not satisfied)...
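The truncated calculation in this excerpt can be checked numerically. A minimal sketch, with illustrative placeholder values for the yi (not from an actual LP solve): under randomized rounding, a clause of positive literals is left unsatisfied exactly when every one of its variables comes out false, which by independence happens with probability ∏(1 − yi).

```python
import random

# Numerical check of the randomized-rounding step: each x_i is set
# true with probability y_i, independently of the others.
random.seed(0)
y = [0.6, 0.3, 0.5]          # assumed fractional values for x1, x2, x3
trials = 200_000

# Clause C_j = (x1 v x2 v x3) is unsatisfied iff all three come out false.
unsat = sum(all(random.random() >= yi for yi in y) for _ in range(trials))
empirical = unsat / trials

# Independence gives P(C_j not satisfied) = prod_i (1 - y_i).
exact = 1.0
for yi in y:
    exact *= 1 - yi          # 0.4 * 0.7 * 0.5 = 0.14

assert abs(empirical - exact) < 0.01
print(round(exact, 3))  # prints 0.14
```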


A Theorist's Toolkit (CMU 18-859T, Fall 2013), Lecture 03: Chernoff/Tail Bounds

where Φ(t) is the probability that the Gaussian distribution is at most t and t ≥ 1. This tells us that the probability that we exceed the standard deviation decreases very rapidly. As an example, consider t = 10√(ln n). Then we get Pr[H ≥ n/2 + t·√n/2] ≤ e^{−50 ln n} = 1/n^50. However, the error term of our above approximation is ±O(1/√n), which is important in this example because the error...
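The arithmetic in this excerpt can be verified in a few lines, assuming the tail bound has the standard Chernoff form e^{−t²/2} (the excerpt only shows the result after substitution): plugging in t = 10√(ln n) gives e^{−50 ln n} = n^{−50}.

```python
import math

# Substituting t = 10 * sqrt(ln n) into an assumed tail bound of the
# form e^{-t^2 / 2} gives e^{-50 ln n} = 1 / n^50, for any n > 1.
n = 1000.0
t = 10 * math.sqrt(math.log(n))

exponent_lhs = -t * t / 2          # exponent of the tail bound e^{-t^2/2}
exponent_rhs = -50 * math.log(n)   # exponent of n^{-50} = e^{-50 ln n}

assert math.isclose(exponent_lhs, exponent_rhs)
print(math.isclose(exponent_lhs, exponent_rhs))  # prints True
```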



Publication date: 2016